    Maximum Likelihood Methods for Inverse Learning of Optimal Controllers

    This paper presents a framework for inverse learning of objective functions for constrained optimal control problems, based on the Karush-Kuhn-Tucker (KKT) conditions. We discuss three variants corresponding to different model assumptions and computational complexities. The first method uses a convex relaxation of the KKT conditions and serves as the benchmark. The main contribution of this paper is two learning methods that combine the KKT conditions with maximum likelihood estimation. The key benefit of this combination is the systematic treatment of constraints for learning from noisy data, using a branch-and-bound algorithm driven by likelihood arguments. This paper discusses theoretical properties of the learning methods and presents simulation results that highlight the advantages of using the maximum likelihood formulation for learning objective functions.
    Comment: 21st IFAC World Congress
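    As a concrete illustration of the benchmark variant described above, the sketch below sets up a convex KKT-residual relaxation for a small finite-horizon LQ problem with input bounds: diagonal objective weights, costates, and constraint multipliers are decision variables, stationarity residuals are minimized, and a normalization rules out the trivial zero solution. The system matrices, horizon, and the way the active set is estimated from data are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of a convex KKT-residual benchmark for inverse learning
# of a finite-horizon LQ objective with input bounds. All problem data,
# dimensions, and tolerances below are illustrative assumptions.
import numpy as np
import cvxpy as cp

# Demonstration data: synthesize an "expert" trajectory from a known objective.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q_true, R_true = np.diag([1.0, 0.1]), np.array([[0.05]])
u_max, T = 0.5, 30

x = cp.Variable((2, T + 1)); u = cp.Variable((1, T))
cost = sum(cp.quad_form(x[:, t], Q_true) + cp.quad_form(u[:, t], R_true)
           for t in range(T))
cons = [x[:, 0] == np.array([2.0, 0.0]), cp.abs(u) <= u_max]
cons += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t] for t in range(T)]
cp.Problem(cp.Minimize(cost), cons).solve()
x_d, u_d = x.value, u.value

# Inverse problem: recover diagonal Q, R by minimizing KKT stationarity
# residuals; costates p and multipliers mu are auxiliary decision variables.
q = cp.Variable(2, nonneg=True)           # diag(Q)
r = cp.Variable(1, nonneg=True)           # diag(R)
p = cp.Variable((2, T + 1))               # costates
mu = cp.Variable((1, T), nonneg=True)     # multipliers for |u| <= u_max
active = (np.abs(np.abs(u_d) - u_max) < 1e-5).astype(float)  # active set from data

res = 0
for t in range(1, T):   # stationarity w.r.t. x_t (x_0 is fixed)
    res += cp.sum_squares(2 * cp.multiply(q, x_d[:, t]) + A.T @ p[:, t + 1] - p[:, t])
for t in range(T):      # stationarity w.r.t. u_t
    res += cp.sum_squares(2 * cp.multiply(r, u_d[:, t]) + B.T @ p[:, t + 1]
                          + cp.multiply(np.sign(u_d[:, t]), mu[:, t]))
cons_inv = [p[:, T] == 0,                          # no terminal cost
            cp.sum(q) + cp.sum(r) == 1,            # normalization (non-triviality)
            cp.multiply(1.0 - active, mu) == 0]    # relaxed complementarity
cp.Problem(cp.Minimize(res), cons_inv).solve()
print("recovered diag(Q):", q.value, "diag(R):", r.value)
```

    Since jointly scaling the weights, costates, and multipliers leaves the KKT conditions invariant, the objective is identifiable only up to the chosen normalization, so the recovered weights should match the ground truth proportionally.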

    Bayesian model predictive control: Efficient model exploration and regret bounds using posterior sampling

    Tight performance specifications in combination with operational constraints make model predictive control (MPC) the method of choice in various industries. As the performance of an MPC controller depends on a sufficiently accurate objective and prediction model of the process, a significant effort in the MPC design procedure is dedicated to modeling and identification. Driven by the increasing amount of available system data and advances in the field of machine learning, data-driven MPC techniques have been developed to facilitate the MPC controller design. While these methods are able to leverage available data, they typically do not provide principled mechanisms to automatically trade off exploitation of available data against exploration to improve and update the objective and prediction model. To this end, we present a learning-based MPC formulation using posterior sampling techniques, which provides finite-time regret bounds on the learning performance while being simple to implement using off-the-shelf MPC software and algorithms. The performance analysis of the method is based on posterior sampling theory, and its practical efficiency is illustrated using a numerical example of a highly nonlinear car-trailer system.
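    A minimal sketch of the posterior-sampling idea follows, under strong simplifying assumptions: a Gaussian (Bayesian linear-regression) posterior over the parameters of a linear model is sampled once per episode, a certainty-equivalent LQR gain stands in for the MPC policy, and the posterior is updated with the collected rollout data. None of the matrices, priors, or the LQR substitution come from the paper.

```python
# A minimal posterior-sampling (Thompson sampling) loop for model learning,
# with a certainty-equivalent LQR gain standing in for the MPC policy.
# Dynamics, prior, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.2], [0.0, 1.0]]); B_true = np.array([[0.0], [0.2]])
n, m = 2, 1
Q, R, sigma = np.eye(n), 0.1 * np.eye(m), 0.05

def lqr_gain(A, B, iters=200):
    """Certainty-equivalent LQR gain via Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Gaussian posterior over Theta = [A B] (independent rows): mean M, precision L.
M = np.zeros((n, n + m)); L = np.eye(n + m)

for episode in range(20):
    # 1) Sample one model from the current posterior.
    cov = sigma**2 * np.linalg.inv(L)
    Theta = np.stack([rng.multivariate_normal(M[i], cov) for i in range(n)])
    # 2) Plan (here: LQR) as if the sampled model were the truth.
    K = lqr_gain(Theta[:, :n], Theta[:, n:])
    # 3) Roll out on the true system and collect transition data.
    x = np.array([1.0, 0.0]); Z, Y = [], []
    for _ in range(50):
        u = -K @ x
        x_next = A_true @ x + B_true @ u + sigma * rng.standard_normal(n)
        Z.append(np.concatenate([x, u])); Y.append(x_next); x = x_next
    Z, Y = np.asarray(Z), np.asarray(Y)
    # 4) Conjugate Bayesian linear-regression update of the posterior.
    L_new = L + Z.T @ Z
    M = np.linalg.solve(L_new, L @ M.T + Z.T @ Y).T
    L = L_new

print("posterior mean of [A B]:\n", M)
```

    Sampling once per episode, rather than re-estimating a single point model, is what produces the exploration/exploitation trade-off: uncertain models are occasionally tried, and the posterior concentrates as data accumulates.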

    Quantization Design for Distributed Optimization

    We consider the problem of solving a distributed optimization problem on a distributed computing platform where communication in the network is limited: each node can only communicate with its neighbours, and the channel has a limited data rate. A common technique to address the latter limitation is to quantize the exchanged information. We propose two distributed optimization algorithms with an iteratively refining quantization design, based on the inexact proximal gradient method and its accelerated variant. We show that if the parameters of the quantizers, i.e. the number of bits and the initial quantization intervals, satisfy certain conditions, then the quantization error is bounded by a linearly decreasing function and convergence of the distributed algorithms is guaranteed. Furthermore, we prove that the quantized distributed algorithms still exhibit a linear convergence rate, and we derive upper bounds on the number of iterations needed to achieve a given accuracy. Finally, we demonstrate the performance of the proposed algorithms and validate the theoretical findings by solving a distributed optimal control problem.
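    The toy sketch below mimics the iteratively refining quantization design on a plain distributed gradient method: each node broadcasts a uniformly quantized iterate, the quantizer is re-centered at the node's previous broadcast, and its range shrinks geometrically so the quantization error decreases across iterations. The ring topology, step size, bit budget, and shrink factor are illustrative assumptions, and the paper's inexact proximal gradient machinery is replaced here by simple distributed gradient descent.

```python
# Toy distributed gradient descent over a ring where exchanged iterates pass
# through a uniform quantizer whose range shrinks geometrically, mimicking an
# "iteratively refining" quantization design. All data are illustrative.
import numpy as np

def quantize(v, center, half_range, bits):
    """Uniform mid-rise quantizer on [center - half_range, center + half_range]."""
    levels = 2 ** bits
    step = 2 * half_range / levels
    lo = center - half_range
    clipped = np.clip(v, lo, center + half_range - 1e-12)
    return lo + (np.floor((clipped - lo) / step) + 0.5) * step

# min over x of sum_i 0.5 * a_i * (x - b_i)^2, one term per node on a 4-node ring.
a = np.array([1.0, 2.0, 0.5, 1.5]); b = np.array([1.0, -1.0, 3.0, 0.0])
W = np.array([[0.5, 0.25, 0.0, 0.25],   # doubly stochastic mixing matrix (ring)
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
x = np.zeros(4); x_hat_prev = np.zeros(4)     # last quantized broadcasts
half_range, shrink, bits, alpha = 4.0, 0.95, 4, 0.1

for k in range(200):
    # Each node broadcasts a quantized iterate; centering the quantizer at the
    # previous broadcast lets a shrinking range still capture the iterate.
    x_hat = quantize(x, x_hat_prev, half_range, bits)
    grad = a * (x - b)                        # local gradients
    x = W @ x_hat - alpha * grad              # mix quantized values, local step
    x_hat_prev, half_range = x_hat, half_range * shrink

print("consensus iterates:", x, "optimum:", np.sum(a * b) / np.sum(a))
```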

    Stochastic Model Predictive Control for Linear Systems using Probabilistic Reachable Sets

    In this paper we propose a stochastic model predictive control (MPC) algorithm for linear discrete-time systems affected by possibly unbounded additive disturbances and subject to probabilistic constraints. Constraints are treated in analogy to robust MPC, using a constraint tightening based on the concept of probabilistic reachable sets, which is shown to provide closed-loop fulfillment of chance constraints under a unimodality assumption on the disturbance distribution. A control scheme that reverts to a backup solution from a previous time step in case of infeasibility is proposed, for which an asymptotic average performance bound is derived. Two examples illustrate the approach, highlighting closed-loop chance constraint satisfaction and the benefits of the proposed controller in the presence of unmodeled disturbances.
    Comment: 57th IEEE Conference on Decision and Control, 2018
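    A compact sketch of the constraint-tightening mechanism, under a Gaussian disturbance assumption: the steady-state covariance of the error dynamics under an assumed ancillary feedback gives a confidence region (a probabilistic reachable set in this special case), whose quantile tightens a half-space chance constraint in an otherwise nominal MPC. All matrices, the feedback gain, and the risk level are illustrative, and the paper's backup-solution recovery scheme is not modeled.

```python
# Chance-constraint tightening via the steady-state error covariance under an
# ancillary feedback K, applied to one half-space constraint in a nominal MPC.
# Numbers, gain, and risk level are illustrative assumptions.
import numpy as np
from scipy.stats import norm
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]]); B = np.array([[0.005], [0.1]])
K = np.array([[1.0, 1.7]])                      # assumed stabilizing error feedback
Sigma_w = 0.01 * np.eye(2)                      # disturbance covariance
Acl = A - B @ K

# Steady-state covariance of the error e+ = (A - BK) e + w; for Gaussian noise
# a probabilistic reachable set is a confidence ellipsoid of this error.
P = Sigma_w.copy()
for _ in range(500):
    P = Acl @ P @ Acl.T + Sigma_w

h = np.array([1.0, 0.0]); x_max, eps = 2.0, 0.1  # Pr(h'x <= x_max) >= 1 - eps
tight = norm.ppf(1 - eps) * np.sqrt(h @ P @ h)   # quantile of h'e

# Nominal MPC over the mean trajectory z with the tightened constraint.
T = 20
z = cp.Variable((2, T + 1)); v = cp.Variable((1, T))
cost = cp.sum_squares(z) + cp.sum_squares(v)
cons = [z[:, 0] == np.array([1.0, 0.0])]
cons += [z[:, t + 1] == A @ z[:, t] + B @ v[:, t] for t in range(T)]
cons += [h @ z[:, t] <= x_max - tight for t in range(1, T + 1)]
cp.Problem(cp.Minimize(cost), cons).solve()
print("tightening:", tight, "first input:", v.value[:, 0])
```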

    Generalised Regret Optimal Controller Synthesis for Constrained Systems

    This paper presents a synthesis method for the generalised dynamic regret problem, comparing the performance of a strictly causal controller to that of the optimal non-causal controller under a weighted disturbance. This framework encompasses both the dynamic regret problem, which considers the difference of the incurred costs, and the competitive ratio, which considers their ratio; both have been proposed as inherently adaptive alternatives to classical control methods. Furthermore, we extend the synthesis to the case of pointwise-in-time bounds on the disturbance and show that the optimal solution is no worse than the bounded-energy optimal solution and is lower bounded by a constant factor that depends only on the disturbance weight. The proposed optimisation-based synthesis allows considering systems subject to state and input constraints. Finally, we provide a numerical example which compares the synthesised controller's performance to that of $\mathcal{H}_2$- and $\mathcal{H}_\infty$-controllers.
    Comment: Accepted at IFAC WC 2023
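    To make the regret quantity concrete, the sketch below evaluates, rather than synthesizes, dynamic regret in stacked (operator) form over a finite horizon: the optimal non-causal disturbance-feedback gain is computed in closed form, a causal comparison policy is obtained by zeroing its anticipatory blocks, and the worst-case regret over unit-energy disturbances is the largest eigenvalue of the difference of the two cost matrices. The system, horizon, identity disturbance weight, and the particular causal policy are assumptions; the paper's contribution is optimizing such a quantity over causal controllers, possibly under constraints.

```python
# Worst-case dynamic regret of a causal disturbance-feedback policy against
# the optimal non-causal policy, in stacked operator form with an identity
# disturbance weight. System, horizon, and comparison policy are assumptions.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]]); B = np.array([[0.0], [0.1]])
n, m, T = 2, 1, 10
Q, R = np.eye(n), np.eye(m)

# Stacked maps x = F u + G w for x = (x_1..x_T), w = (w_0..w_{T-1}).
F = np.zeros((n * T, m * T)); G = np.zeros((n * T, n * T))
for t in range(T):
    for s in range(t + 1):
        F[t*n:(t+1)*n, s*m:(s+1)*m] = np.linalg.matrix_power(A, t - s) @ B
        G[t*n:(t+1)*n, s*n:(s+1)*n] = np.linalg.matrix_power(A, t - s)
Qb, Rb = np.kron(np.eye(T), Q), np.kron(np.eye(T), R)

def cost_matrix(K):
    """Quadratic form P with w' P w = total cost when u = K w."""
    X = F @ K + G
    return X.T @ Qb @ X + K.T @ Rb @ K

# Optimal non-causal disturbance feedback u = Knc w (closed form).
H = F.T @ Qb @ F + Rb
Knc = -np.linalg.solve(H, F.T @ Qb @ G)
P_nc = cost_matrix(Knc)

# Causal comparison policy: zero out the anticipatory blocks of Knc so that
# u_t depends only on w_0..w_{t-1}.
Kc = Knc.copy()
for t in range(T):
    Kc[t*m:(t+1)*m, t*n:] = 0.0

# Worst-case regret over unit-energy disturbances = max eigenvalue.
regret = np.max(np.linalg.eigvalsh(cost_matrix(Kc) - P_nc))
print("worst-case regret of the truncated causal policy:", regret)
```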

    Performance and safety of Bayesian model predictive control: Scalable model-based RL with guarantees

    Despite the success of reinforcement learning (RL) in various research fields, relatively few RL algorithms have been applied to industrial control applications. This unexplored potential is partly due to the significant tuning effort required, the large number of learning episodes, i.e. experiments, needed, and the limited availability of RL methods that can handle high-dimensional, safety-critical dynamical systems with continuous state and action spaces. Building on model predictive control (MPC) concepts, we propose a cautious model-based reinforcement learning algorithm to mitigate these limitations. While the underlying policy of the approach can be efficiently implemented in the form of a standard MPC controller, data-efficient learning is achieved through posterior sampling techniques. We provide a rigorous performance analysis of the resulting 'Bayesian MPC' algorithm by establishing Lipschitz continuity of the corresponding future reward function and by bounding the expected number of unsafe learning episodes using an exact penalty soft-constrained MPC formulation. The efficiency and scalability of the method are illustrated using a 100-dimensional server cooling example and a nonlinear 10-dimensional drone example, comparing performance against nominal posterior MPC, which is commonly used for data-driven control of constrained dynamical systems.
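    The exact penalty soft-constrained MPC ingredient mentioned above can be sketched as follows: state constraints are relaxed with nonnegative slack variables that enter the cost through a large l1 penalty, so the optimization remains feasible during learning while constraint violations stay observable. The dynamics, weights, and penalty value are illustrative assumptions, not the paper's formulation.

```python
# Soft-constrained MPC with an exact (l1) penalty on constraint slacks.
# Matrices, weights, and the penalty value are illustrative assumptions.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]]); B = np.array([[0.005], [0.1]])
T, x_max, rho = 15, np.array([2.0, 1.0]), 1e3   # rho: exact-penalty weight

x0 = cp.Parameter(2)
x = cp.Variable((2, T + 1)); u = cp.Variable((1, T))
s = cp.Variable((2, T + 1), nonneg=True)        # constraint slacks

cost = cp.sum_squares(x) + 0.1 * cp.sum_squares(u) + rho * cp.sum(s)
cons = [x[:, 0] == x0, cp.abs(u) <= 1.0]
cons += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t] for t in range(T)]
cons += [x[:, t] <= x_max + s[:, t] for t in range(T + 1)]
mpc = cp.Problem(cp.Minimize(cost), cons)

x0.value = np.array([2.5, 0.8])                 # violates the hard constraint
mpc.solve()
print("first input:", u.value[:, 0], "max slack:", s.value.max())
```

    With rho chosen large enough, the l1 penalty is exact: whenever the hard-constrained problem is feasible, the slacks are zero and the soft formulation returns the same solution, while infeasible states still yield a well-defined control input.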